Fairness-Aware Classifier with Prejudice Remover Regularizer
Authors
Abstract
With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect individuals’ lives. For example, credit scoring is frequently determined based on the records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be nondiscriminatory and fair with respect to sensitive features, such as race, gender, and religion. Several researchers have recently begun to develop analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, due to the indirect influence of sensitive information. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency.
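The abstract states the general recipe (a probabilistic discriminative classifier trained with an additional fairness penalty, instantiated for logistic regression) without giving formulas. The sketch below illustrates that recipe under stated assumptions: the simple group-gap penalty stands in for the paper's mutual-information-based prejudice remover term, and all variable names, hyper-parameters, and the synthetic data are illustrative, not the authors' published formulation.

# Minimal sketch of fairness-regularized logistic regression.
# Assumption: a squared gap between group-wise mean predictions replaces the
# paper's mutual-information "prejudice" term; eta and lam are illustrative.
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, X, y, s, eta=1.0, lam=0.01):
    """Negative log-likelihood + fairness penalty + L2 regularizer.

    X   : (n, d) features, with the sensitive feature excluded
    y   : (n,)  binary target labels
    s   : (n,)  binary sensitive feature (used only inside the penalty)
    eta : weight of the fairness penalty
    lam : weight of the L2 regularizer
    """
    p = sigmoid(X @ w)                               # Pr[y = 1 | x; w]
    eps = 1e-12
    nll = -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

    # Fairness penalty: squared gap between the mean predicted probability of
    # the two sensitive groups (assumes both groups appear in the data).
    gap = p[s == 1].mean() - p[s == 0].mean()

    return nll + eta * gap ** 2 + 0.5 * lam * np.dot(w, w)

# Hypothetical usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
s = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=200) > 0).astype(float)
w_hat = minimize(objective, np.zeros(X.shape[1]),
                 args=(X, y, s), method="L-BFGS-B").x

Increasing eta trades predictive accuracy for a smaller dependence of the predictions on the sensitive feature, which is the trade-off the regularization approach is designed to control.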
Similar resources
Analysing the Fairness of Fairness-aware Classifiers
Calders and Verwer’s two-naive-Bayes is one of the fairness-aware classifiers, which classify objects while excluding the influence of specific information. We analyze why this classifier achieves a very high level of fairness and show that this is due to its decision rule and a model bias. Based on these findings, we develop methods that are grounded in rigid theory and are applicable to wider...
Full text
On Fairness, Diversity and Randomness in Algorithmic Decision Making
Consider a binary decision making process where a single machine learning classifier replaces a multitude of humans. We raise questions about the resulting loss of diversity in the decision making process. We study the potential benefits of using random classifier ensembles instead of a single classifier in the context of fairness-aware learning and demonstrate various attractive properties: (i...
Full text
Fairness-Aware Learning with Restriction of Universal Dependency using f-Divergences
Fairness-aware learning is a novel framework for classification tasks. Like regular empirical risk minimization (ERM), it aims to learn a classifier with a low error rate and, at the same time, to make the classifier's predictions independent of sensitive features, such as gender, religion, race, and ethnicity. Existing methods can achieve low dependencies on given samples, but this is n...
Full text
A Brief Review of Ayurveda*
There is much to be desired for the promotion of greater understanding and appreciation of Ayurveda in the Western world. This is an attempt on the part of a Western mind to present an objective study of the basic concepts of this basic science with a view to dispelling Western prejudice. The fact that Ayurveda, by virtue of its unique efficacy, has been recognized by the WHO should remove further...
Full text
A Developmental Science Approach to Reducing Prejudice and Social Exclusion: Intergroup Processes, Social-Cognitive Development, and Moral Reasoning
This article presents a developmental science approach to changing attitudes and rectifying prejudice and discrimination. This is crucial because stereotypes and prejudicial attitudes are deeply entrenched by adulthood; the time for intervention is before biases are fully formed. Adults as well as children are both the recipients and the perpetrators of prejudice, as reflected by so...
Full text